We study fairness in social choice settings under single-peaked preferences. The construction and characterization of social choice rules in the single-peaked domain have been studied extensively in prior work. In fact, it is well known that in the single-peaked domain, unanimous and strategy-proof deterministic rules must be min-max rules, and those that additionally satisfy anonymity must be median rules. Further, random social choice rules satisfying these properties have been shown to be convex combinations of the respective deterministic rules. We add non-trivially to these results by incorporating fairness considerations into social choice. Our study directly addresses fairness for groups of agents. To study group fairness, we consider an existing partition of the agents into logical groups based on natural attributes such as gender, race, and location. To capture fairness within each group, we introduce the notion of group-wise anonymity. To capture fairness across the groups, we propose a weak notion as well as a strong notion of fairness. The proposed fairness notions turn out to be natural generalizations of existing individual-fairness notions and, moreover, unlike existing group-fairness notions, they yield non-trivial outcomes for strict ordinal preferences. We provide two separate characterizations of random social choice rules that satisfy group fairness: (i) a direct characterization and (ii) an extreme-point characterization (as convex combinations of fair deterministic social choice rules). We also explore the special case with no groups and provide rules that achieve individual fairness.
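For context, a minimal sketch of the shape these rules take in the classical (Moulin-style) formulation over an interval of alternatives; the exact parameterization used in the paper is not given in the abstract, so the symbols below ($\tau(P_i)$ for the peak of agent $i$, parameters $\beta_S$, phantom points $\alpha_j$) are illustrative assumptions rather than the authors' definitions:
\[
f(P) \;=\; \min_{S \subseteq N}\, \max\big(\{\tau(P_i) : i \in S\} \cup \{\beta_S\}\big) \qquad \text{(min-max rule)}
\]
\[
f(P) \;=\; \operatorname{med}\big(\tau(P_1), \ldots, \tau(P_n), \alpha_1, \ldots, \alpha_{n-1}\big) \qquad \text{(median rule with } n-1 \text{ fixed phantom points)}
\]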
The relevance of machine learning (ML) in our daily lives is closely intertwined with its explainability. Explainability allows end-users to have a transparent and human-centred understanding of an ML scheme's capability and utility, and it fosters user confidence in the automated decisions of a system. Explaining which variables or features drive a model's decision is a pressing need. We could not find prior work that explains features on the basis of their class-distinguishing abilities (especially when real-world data are mostly multi-class in nature). In any given dataset, a feature is not equally good at making distinctions between the different possible categorizations (or classes) of the data points. In this work, we explain the features on the basis of their class- or category-distinguishing capabilities. In particular, we estimate the class-distinguishing capabilities (scores) of the variables for pair-wise class combinations. We validate the explainability given by our scheme empirically on several real-world, multi-class datasets. We further utilize the class-distinguishing scores in a latent-feature context and propose a novel decision-making protocol. Another novelty of this work lies in a \emph{refuse to render decision} option when the latent variable (of the test point) has a high class-distinguishing potential for the likely classes.
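As a minimal sketch of how such pair-wise class-distinguishing scores could be computed, the snippet below uses a simple Fisher-style separation measure as a stand-in for the paper's (unspecified) estimator; the function name pairwise_class_distinguishing_scores and the synthetic data are illustrative assumptions, not the authors' implementation.

import itertools
import numpy as np

def pairwise_class_distinguishing_scores(X, y):
    """Per-feature separation score for every pair of classes.

    Illustrative proxy only: a Fisher-style ratio of the class-mean gap to the
    combined spread; the paper's actual scoring scheme is not specified here.
    Returns a dict mapping (class_a, class_b) -> array of per-feature scores.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    scores = {}
    for a, b in itertools.combinations(np.unique(y), 2):
        Xa, Xb = X[y == a], X[y == b]
        mean_gap = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0))
        spread = Xa.std(axis=0) + Xb.std(axis=0) + 1e-12  # avoid division by zero
        scores[(a, b)] = mean_gap / spread
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = rng.integers(0, 3, size=300)
    X[y == 2, 0] += 3.0  # make feature 0 strongly distinguish class 2
    s = pairwise_class_distinguishing_scores(X, y)
    print(np.argsort(s[(0, 2)])[::-1])  # feature 0 should rank highest for the (0, 2) pair

In the same spirit, a decision protocol could consult these scores for a test point's likely classes and, as the abstract describes, refuse to render a decision when the relevant latent variable shows high class-distinguishing potential.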